
    Sixty Years of Fractal Projections

    Sixty years ago, John Marstrand published a paper which, among other things, relates the Hausdorff dimension of a plane set to the dimensions of its orthogonal projections onto lines. For many years the paper attracted very little attention. Over the past 30 years, however, Marstrand's projection theorems have become the prototype for many results in fractal geometry, with numerous variants and applications, and they continue to motivate leading research. (Submitted to the proceedings of Fractals and Stochastics.)
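    For context, the projection theorem referred to here is usually stated as follows (a standard modern formulation, not quoted from Marstrand's original paper): for a Borel set $A \subseteq \mathbb{R}^2$, with $\pi_\theta$ the orthogonal projection onto the line through the origin at angle $\theta$,

```latex
% Marstrand's projection theorem (1954), standard modern formulation.
\[
  \dim_{\mathrm{H}} \pi_\theta(A) = \min\{\dim_{\mathrm{H}} A,\ 1\}
  \quad \text{for Lebesgue-almost every } \theta \in [0,\pi),
\]
% and in the supercritical case the projections are large in measure:
\[
  \dim_{\mathrm{H}} A > 1 \;\Longrightarrow\;
  \mathcal{L}^1\bigl(\pi_\theta(A)\bigr) > 0
  \quad \text{for almost every } \theta \in [0,\pi).
\]
```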

    Distance sets, orthogonal projections, and passing to weak tangents

    We consider the Assouad dimension analogues of two important problems in geometric measure theory. These problems are tied together by the common theme of ‘passing to weak tangents’. First, we solve the analogue of Falconer’s distance set problem for Assouad dimension in the plane: if a planar set has Assouad dimension greater than 1, then its distance set has Assouad dimension 1. We also obtain partial results in higher dimensions. Second, we consider how Assouad dimension behaves under orthogonal projection. We extend the planar projection theorem of Fraser and Orponen to higher dimensions, provide estimates on the (Hausdorff) dimension of the exceptional set of projections, and provide a recipe for obtaining results about restricted families of projections. We provide several illustrative examples throughout. (The author is supported by a Leverhulme Trust Research Fellowship, RF-2016-500.)
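    Two standard definitions underpin this abstract (stated here for background; the notation is ours, not taken from the paper): the distance set of $F \subseteq \mathbb{R}^n$ and the Assouad dimension.

```latex
% Distance set of F:
\[
  D(F) = \bigl\{\, |x - y| : x, y \in F \,\bigr\}.
\]
% Assouad dimension: the infimum of exponents alpha for which every
% ball B(x, R) centred in F can be covered at every finer scale r
% by at most C (R/r)^alpha sets of diameter r.
\[
  \dim_{\mathrm{A}} F = \inf\Bigl\{\, \alpha \ge 0 : \exists\, C > 0
  \text{ such that } \forall x \in F,\ \forall\, 0 < r < R,\ 
  N_r\bigl(B(x,R) \cap F\bigr) \le C \Bigl(\tfrac{R}{r}\Bigr)^{\alpha} \Bigr\},
\]
% where N_r(E) is the least number of sets of diameter at most r
% needed to cover E.
```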

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize results, and point to relevant references in the literature.

    Self-similar sets: projections, sections and percolation

    We survey some recent results on the dimension of orthogonal projections of self-similar sets and of random subsets obtained by percolation on self-similar sets. In particular we highlight conditions when the dimension of the projections takes the generic value for all, or very nearly all, projections. We then describe a method for deriving dimensional properties of sections of deterministic self-similar sets by utilising projection properties of random percolation subsets.
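    As background (standard definitions, not taken from the abstract): a self-similar set is the attractor of an iterated function system of contracting similarities,

```latex
% Self-similar set: the unique non-empty compact set A invariant under
% an IFS {S_1, ..., S_m} of contracting similarity maps on R^n:
\[
  A = \bigcup_{i=1}^{m} S_i(A),
  \qquad |S_i(x) - S_i(y)| = c_i\,|x - y|, \quad 0 < c_i < 1.
\]
% Under the open set condition (Hutchinson), dim_H A equals the
% similarity dimension s, the unique solution of:
\[
  \sum_{i=1}^{m} c_i^{\,s} = 1.
\]
```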

    Clusters of solutions and replica symmetry breaking in random k-satisfiability

    We study the set of solutions of random k-satisfiability formulae through the cavity method. It is known that, for an interval of the clause-to-variable ratio, the solution set decomposes into an exponential number of pure states (clusters). We refine this picture substantially by: (i) determining the precise location of the clustering transition; (ii) uncovering a second `condensation' phase transition in the structure of the solution set for k greater than or equal to 4. Both results follow from computing the large-deviation rate of the internal entropy of pure states. From a technical point of view, our main contributions are a simplified version of the cavity formalism for special values of the Parisi replica symmetry breaking parameter m (in particular for m = 1, via a correspondence with the tree reconstruction problem) and new large-k expansions. (30 pages, 14 figures; typos corrected, and the discussion of Appendix C expanded with a new figure.)
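    The random ensemble studied here can be made concrete with a short sketch (our own illustration, with hypothetical names; it only samples a random k-SAT formula at clause-to-variable ratio alpha and brute-forces satisfiability, and has nothing to do with the cavity method itself):

```python
import itertools
import random

def random_ksat(n, alpha, k=4, seed=0):
    """Sample a random k-SAT formula with m = round(alpha * n) clauses
    over n Boolean variables. Literals are signed integers: +v means
    variable v, -v its negation; each clause uses k distinct variables."""
    rng = random.Random(seed)
    formula = []
    for _ in range(round(alpha * n)):
        variables = rng.sample(range(1, n + 1), k)
        formula.append([v if rng.random() < 0.5 else -v for v in variables])
    return formula

def satisfiable(formula, n):
    """Brute-force satisfiability check (exponential in n; only for
    tiny instances)."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in formula):
            return True
    return False

# At alpha = 2.0, far below the 4-SAT satisfiability threshold, a random
# instance is overwhelmingly likely to be satisfiable.
formula = random_ksat(n=12, alpha=2.0, k=4, seed=1)
print(satisfiable(formula, 12))
```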

    On the Bounds of Function Approximations

    Within machine learning, the subfield of Neural Architecture Search (NAS) has recently garnered research attention due to its ability to improve upon human-designed models. However, the computational requirements for finding an exact solution to this problem are often intractable, and the design of the search space still requires manual intervention. In this paper we attempt to establish a formalized framework from which we can better understand the computational bounds of NAS in relation to its search space. To this end, we first reformulate the function approximation problem in terms of sequences of functions, calling it the Function Approximation (FA) problem; we then show that it is computationally infeasible to devise a procedure that solves FA for all functions to zero error, regardless of the search space. We also show that this error will be minimal if a specific class of functions is present in the search space. Subsequently, we show that machine learning as a mathematical problem is a solution strategy for FA, albeit not an effective one, and we describe a stronger version of this approach: the Approximate Architectural Search Problem (a-ASP), the mathematical equivalent of NAS. We leverage the framework from this paper and results from the literature to describe the conditions under which a-ASP can potentially solve FA as well as an exhaustive search, but in polynomial time. (Accepted as a full paper at ICANN 2019; the final, authenticated publication will be available at https://doi.org/10.1007/978-3-030-30487-4_3.)
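    The exhaustive-search baseline the abstract compares against can be illustrated with a toy sketch (entirely our own construction; the candidate set, target, and all names are hypothetical, and this is not the paper's FA formalism): enumerate every function in a small search space, score each against a target on sample points, and keep the best.

```python
def best_approximation(candidates, target, points):
    """Exhaustively score each candidate function by its worst-case
    error on the sample points; return the best (name, error) pair."""
    scores = {name: max(abs(f(x) - target(x)) for x in points)
              for name, f in candidates.items()}
    best = min(scores, key=scores.get)
    return best, scores[best]

# A tiny, explicitly enumerated "search space" of candidate functions.
candidates = {
    "identity": lambda x: x,
    "square":   lambda x: x * x,
    "double":   lambda x: 2 * x,
}
target = lambda x: x * x + 0.1        # deliberately outside the space
points = [i / 10 for i in range(11)]  # sample grid on [0, 1]

name, err = best_approximation(candidates, target, points)
# The error cannot reach zero because the target lies outside the
# search space, echoing the abstract's point that no procedure can
# solve FA to zero error for all functions.
```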

    Computation of the multivariate
